
    Making the Best of Workplace Diversity: From the Management Level to the Employee Level


    Bio-inspired Neural Networks for Angular Velocity Estimation in Visually Guided Flights

    Executing delicate flight maneuvers using visual information is a huge challenge for future robotic vision systems. As a source of inspiration, insects are quite adept at navigating through woods and landing on surfaces, tasks which require delicate visual perception and flight control. The exquisite sensitivity of insects to image motion speed, as revealed recently, arises from a class of specific neurons called descending neurons. Some of the descending neurons have demonstrated angular velocity selectivity as the image motion speed varies on the retina. Building a quantitative angular velocity detection model is the first step not only towards further understanding of the biological visual system, but also towards providing robust and economical solutions for visual motion perception in artificial visual systems. This thesis aims to explore biological image processing methods for motion speed detection in visually guided flights. The major contributions are summarized as follows. We have presented an angular velocity decoding model (AVDM), which estimates the visual motion speed by combining both textural and temporal information from input signals. The model consists of three parts: elementary motion detection circuits, a wide-field texture estimation pathway and an angular velocity decoding layer. When first tested with moving sinusoidal gratings, the model estimates the angular velocity very well, with improved spatial frequency independence compared to the state-of-the-art angular velocity detecting models. This spatial independence is vital for accounting for the honeybee's flight behaviors. We have also investigated the spatial and temporal resolutions of honeybees to obtain a bio-plausible parameter setting for explaining these behaviors. To investigate whether the model can account for observations of tunnel centering behaviors of honeybees, the model has been implemented in a virtual bee simulated with the game engine Unity. The simulation results of a series of experiments show that the agent can adjust its position to fly through patterned tunnels by balancing the angular velocities estimated by both eyes under several circumstances. All tunnel simulations reproduce behaviors similar to those of real bees, which indicates that our model does provide a possible explanation for estimating the image velocity and can be used for regulating an MAV's flight course in tunnels. Moreover, to further verify the robustness of the model, visually guided terrain following simulations have been carried out with a closed-loop control scheme that restores a preset angular velocity during the flight. The simulation results of successfully flying over undulating terrain verify the feasibility and robustness of the AVDM in various application scenarios, and show its potential for micro aerial vehicle terrain following. In addition, we have also applied the AVDM to grazing landing using only visual information. An LGMD neuron is also introduced to avoid collision and to trigger the hover phase, which ensures the safety of landing. Applying the honeybee's landing strategy of keeping a constant angular velocity, we have designed a closed-loop control scheme with an adaptive gain that controls the landing dynamics using the AVDM response as input. A series of controlled trials have been designed on the Unity platform to demonstrate the effectiveness of the proposed model and control scheme for visual landing under various conditions. The proposed model could be implemented on real small robots to investigate its robustness in real landing scenarios in the near future.
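    At the core of the AVDM's elementary motion detection circuits is delay-and-correlate processing of neighbouring photoreceptor signals. The sketch below is a minimal Hassenstein-Reichardt-style correlator in Python; the filter time constant, sample spacing and signal names are illustrative assumptions rather than parameters taken from the thesis.

```python
import numpy as np

def lowpass(signal, tau):
    """First-order low-pass filter acting as the temporal delay arm of an EMD."""
    out = np.zeros(len(signal))
    alpha = 1.0 / (tau + 1.0)                 # discrete-time smoothing factor
    for t in range(1, len(signal)):
        out[t] = out[t - 1] + alpha * (signal[t] - out[t - 1])
    return out

def emd_response(left, right, tau=10.0):
    """Hassenstein-Reichardt correlator: delay each photoreceptor signal,
    correlate it with its undelayed neighbour, and take the difference of the
    two mirror-symmetric products."""
    return lowpass(left, tau) * right - lowpass(right, tau) * left

# Two neighbouring photoreceptor signals produced by a drifting sinusoidal grating.
t = np.arange(200)
left = np.sin(2 * np.pi * t / 40.0)
right = np.sin(2 * np.pi * (t - 8) / 40.0)    # the same signal, 8 samples later

# A consistently positive mean (after the filter transient) indicates motion in
# the detector's preferred direction; its magnitude also varies with spatial
# frequency, which is why the AVDM adds a texture pathway for decoding.
print(emd_response(left, right)[50:].mean())
```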

    Improved Collision Perception Neuronal System Model with Adaptive Inhibition Mechanism and Evolutionary Learning

    Accurate and timely perception of collision in highly variable environments is still a challenging problem for artificial visual systems. As a source of inspiration, the lobula giant movement detectors (LGMDs) in the locust's visual pathways have been studied intensively, and modelled as quick collision detectors against challenges from various scenarios including vehicles and robots. However, the state-of-the-art LGMD models have not achieved acceptable robustness in more challenging scenarios, such as varied vehicle driving scenes, due to the lack of adaptive signal processing mechanisms. To address this problem, we propose an improved neuronal system model, called LGMD+, featuring novel modelling of spatiotemporal inhibition dynamics with biological plausibility, including 1) lateral inhibitions with global biases defined by a variant of the Gaussian distribution, spatially, and 2) an adaptive feedforward inhibition mediation pathway, temporally. Accordingly, the LGMD+ detects only those approaching objects that pose head-on collision risks more effectively, by appropriately suppressing motion distractors caused by vibrations, near misses or approaching stimuli deviating from the centre of the view. Through evolutionary learning with a systematic dataset of various crash and non-collision driving scenarios, the LGMD+ shows improved robustness, outperforming the previous related methods. After evolution, its computational simplicity, flexibility and robustness have also been well demonstrated by real-time experiments with autonomous micro mobile robots.
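    The spatial half of the inhibition scheme, lateral inhibition with a Gaussian-weighted global bias, can be pictured as a blurred copy of the excitation layer subtracted from the excitation itself. The function below is a minimal sketch of that idea, assuming frame-differencing excitation and a single output cell; the kernel width and gain are illustrative, not the evolved parameters of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lgmd_plus_like_response(prev_frame, curr_frame, sigma=3.0, inhibition_gain=0.8):
    """Excitation is the absolute luminance change between consecutive frames;
    inhibition is a Gaussian-blurred copy of that excitation, so widespread
    motion (vibration, background flow) largely cancels itself while a compact
    expanding edge survives the subtraction."""
    excitation = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    inhibition = gaussian_filter(excitation, sigma=sigma)
    summation = np.maximum(excitation - inhibition_gain * inhibition, 0.0)
    return summation.sum() / summation.size    # membrane potential of the output cell

# Toy usage with two random frames standing in for consecutive video frames.
rng = np.random.default_rng(0)
frame0 = rng.random((120, 160))
frame1 = rng.random((120, 160))
print(lgmd_plus_like_response(frame0, frame1))
```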

    Visual Cue Integration for Small Target Motion Detection in Natural Cluttered Backgrounds

    The robust detection of small targets against cluttered backgrounds is important for future artificial visual systems in searching and tracking applications. Insects' visual systems have demonstrated an excellent ability to avoid predators, find prey or identify conspecifics, which always appear as small dim speckles in the visual field. Building a computational model of the insects' visual pathways could provide effective solutions for detecting small moving targets. Although a few visual system models have been proposed, they only make use of small-field visual features for motion detection, and their detection results often contain a number of false positives. To address this issue, we develop a new visual system model for small target motion detection against cluttered moving backgrounds. In contrast to the existing models, small-field and wide-field visual features are separately extracted by two motion-sensitive neurons to detect small target motion and background motion. These two types of motion information are further integrated to filter out false positives. Extensive experiments showed that the proposed model outperforms the existing models in terms of detection rates.
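    The key integration step, suppressing small-field (target) responses wherever wide-field (background) motion is strong, can be sketched as a map-level subtraction followed by thresholding. The function and parameter names below are illustrative assumptions, not the model's actual formulation.

```python
import numpy as np

def integrate_cues(stmd_map, background_map, threshold=0.5, gain=1.0):
    """Subtract wide-field (background) motion energy from the small-field
    (STMD) response map, then keep only locations still above threshold."""
    suppressed = stmd_map - gain * background_map
    return np.argwhere(suppressed > threshold)   # (row, col) target candidates

# Toy usage: one blip explained by background motion, one genuine small target.
stmd = np.zeros((50, 50))
background = np.zeros((50, 50))
stmd[10, 10], background[10, 10] = 0.9, 0.8      # blip co-located with background flow
stmd[30, 30] = 0.9                                # isolated small-target response
print(integrate_cues(stmd, background))           # only (30, 30) survives
```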

    A Time-Delay Feedback Neural Network for Discriminating Small, Fast-Moving Targets in Complex Dynamic Environments

    Discriminating small moving objects within complex visual environments is a significant challenge for autonomous micro robots that are generally limited in computational power. By exploiting their highly evolved visual systems, flying insects can effectively detect mates and track prey during rapid pursuits, even though the small targets equate to only a few pixels in their visual field. This high degree of sensitivity to small target movement is supported by a class of specialized neurons called small target motion detectors (STMDs). Existing STMD-based computational models normally comprise four sequentially arranged neural layers interconnected via feedforward loops to extract information on small target motion from raw visual inputs. However, feedback, another important regulatory circuit for motion perception, has not been investigated in the STMD pathway, and its functional roles in small target motion detection are not clear. In this paper, we propose an STMD-based neural network with a feedback connection (Feedback STMD), where the network output is temporally delayed and then fed back to the lower layers to mediate neural responses. We compare the properties of the model with and without the time-delay feedback loop, and find that it shows a preference for high-velocity objects. Extensive experiments suggest that the Feedback STMD achieves superior detection performance for fast-moving small targets, while significantly suppressing background false-positive movements which display lower velocities. The proposed feedback model provides an effective solution in robotic visual systems for detecting fast-moving small targets that are always salient and potentially threatening.
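    A minimal way to picture the time-delay feedback loop is a layer that subtracts a delayed, gain-scaled copy of its own past output from the current feedforward response, so sustained responses from slow objects suppress themselves while brief transients from fast targets pass before the feedback arrives. The class below is a sketch under those assumptions; the delay length, gain and subtractive sign are illustrative rather than the paper's tuned values.

```python
from collections import deque

class FeedbackLayer:
    """Feeds a time-delayed, gain-scaled copy of the layer's own past output
    back onto the current feedforward response (subtractively in this sketch)."""

    def __init__(self, delay_steps=5, feedback_gain=0.3):
        self.buffer = deque([0.0] * delay_steps, maxlen=delay_steps)
        self.gain = feedback_gain

    def step(self, feedforward_response):
        delayed_output = self.buffer[0]            # own output, delay_steps frames ago
        output = feedforward_response - self.gain * delayed_output
        self.buffer.append(output)
        return output

# Toy usage: a short burst of feedforward activity followed by silence.
layer = FeedbackLayer()
stimulus = [0.0, 0.2, 0.9, 0.9, 0.2, 0.0, 0.0, 0.0]
print([round(layer.step(x), 2) for x in stimulus])
```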

    A bioinspired angular velocity decoding neural network model for visually guided flights

    Efficient and robust motion perception systems are important prerequisites for achieving visually guided flights in future micro air vehicles. As a source of inspiration, the visual neural networks of flying insects such as the honeybee and Drosophila provide ideal examples on which to base artificial motion perception models. In this paper, we have used this approach to develop a novel method that solves the fundamental problem of estimating angular velocity for visually guided flights. Compared with previous models, our elementary motion detector (EMD) based model uses a separate texture estimation pathway to effectively decode angular velocity, and demonstrates considerable independence from the spatial frequency and contrast of the gratings. Using the Unity development platform, the model is further tested in tunnel centering and terrain following paradigms in order to reproduce the visually guided flight behaviors of honeybees. In a series of controlled trials, the virtual bee utilizes the proposed angular velocity control schemes to accurately navigate through a patterned tunnel and to maintain a suitable distance from the undulating textured terrain. The results are consistent with both neuron spike recordings and behavioral path recordings of real honeybees, thereby demonstrating the model's potential for implementation in micro air vehicles which have only visual sensors.
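    The tunnel-centering control scheme amounts to steering away from the eye that reports the higher image angular velocity. A toy closed-loop sketch of that idea is shown below; the geometry, gain and update rule are illustrative assumptions, not the controller used in the paper.

```python
def centring_command(omega_left, omega_right, gain=0.01):
    """Lateral steering command: move away from the eye reporting the higher
    image angular velocity, i.e. away from the nearer tunnel wall."""
    return gain * (omega_left - omega_right)       # positive means shift right

V, W = 2.0, 1.0         # forward speed (m/s) and tunnel width (m), both assumed
y = 0.15                # lateral offset from the centre line, positive to the right

for _ in range(300):
    omega_left = V / (W / 2 + y)    # translational image velocity seen by the left eye
    omega_right = V / (W / 2 - y)   # ... and by the right eye (nearer wall, larger value)
    y += centring_command(omega_left, omega_right)

print(round(y, 3))       # the offset has decayed towards 0, the tunnel centre
```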

    Angular Velocity Estimation of Image Motion Mimicking the Honeybee Tunnel Centring Behaviour

    Insects use visual information to estimate the angular velocity of retinal image motion, which determines a variety of flight behaviours including speed regulation, tunnel centring and visual navigation. For angular velocity estimation, honeybees show strong independence from the spatial structure of the visual stimuli, whereas previous models have not achieved such an ability. To address this issue, we propose a bio-plausible model for estimating the image motion velocity based on behavioural experiments with honeybees flying through patterned tunnels. The proposed model contains three main parts: a texture estimation layer for spatial information extraction, a delay-and-correlate layer for temporal information extraction, and a decoding layer for angular velocity estimation. The model produces responses that are largely independent of the spatial frequency in grating experiments, and it has been implemented in a virtual bee for tunnel centring simulations. The results coincide with both electrophysiological neuron spike recordings and behavioural path recordings, which indicates that our proposed method provides a better explanation of the honeybee's image motion detection mechanism guiding the tunnel centring behaviour.
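    The decoding layer's spatial-frequency independence can be pictured as dividing a temporal-frequency-like signal by a spatial-frequency estimate supplied by the texture pathway, since image velocity equals temporal frequency over spatial frequency. The sketch below illustrates that relationship with a crude zero-crossing texture estimator; it is a simplification for illustration, not the model's actual decoding scheme.

```python
import numpy as np

def estimate_spatial_frequency(image_row):
    """Texture pathway: count zero crossings of the mean-subtracted row as a
    crude proxy for spatial frequency (cycles per pixel)."""
    centred = image_row - image_row.mean()
    crossings = np.sum(np.signbit(centred[:-1]) != np.signbit(centred[1:]))
    return crossings / (2.0 * len(image_row))

def decode_angular_velocity(temporal_frequency, spatial_frequency):
    """Decoding layer: image velocity = temporal frequency / spatial frequency,
    so the result does not depend on the grating's spatial period itself."""
    return temporal_frequency / max(spatial_frequency, 1e-6)

row = np.sin(2 * np.pi * np.arange(200) / 25.0)    # grating with a 25-pixel period
sf = estimate_spatial_frequency(row)                # crude estimate near 0.04 cycles/pixel
print(decode_angular_velocity(2.0, sf))             # near the true 2 Hz / 0.04 = 50 px/s
```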

    Attention and Prediction-Guided Motion Detection for Low-Contrast Small Moving Targets

    Small target motion detection within complex natural environments is an extremely challenging task for autonomous robots. Surprisingly, the visual systems of insects have evolved to be highly efficient in detecting mates and tracking prey, even though targets occupy as little as a few degrees of their visual fields. The excellent sensitivity to small target motion relies on a class of specialized neurons called small target motion detectors (STMDs). However, existing STMD-based models are heavily dependent on visual contrast and perform poorly in complex natural environments where small targets generally exhibit extremely low contrast against neighbouring backgrounds. In this paper, we develop an attention and prediction guided visual system to overcome this limitation. The developed visual system comprises three main subsystems, namely, an attention module, an STMD-based neural network, and a prediction module. The attention module searches for potential small targets in the predicted areas of the input image and enhances their contrast against the complex background. The STMD-based neural network receives the contrast-enhanced image and discriminates small moving targets from background false positives. The prediction module foresees future positions of the detected targets and generates a prediction map for the attention module. The three subsystems are connected in a recurrent architecture, allowing information to be processed sequentially to activate specific areas for small target detection. Extensive experiments on synthetic and real-world datasets demonstrate the effectiveness and superiority of the proposed visual system in detecting small, low-contrast moving targets against complex natural environments.
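    The recurrent attention-prediction loop can be sketched as constant-velocity extrapolation of detected tracks followed by local contrast enhancement around the predicted positions before the next STMD pass. The functions below illustrate that flow; the prediction rule, neighbourhood radius and gain are assumptions for illustration only.

```python
import numpy as np

def predict_next_positions(tracks, step=1):
    """Prediction module: constant-velocity extrapolation of each (x, y, vx, vy)
    track one frame ahead, producing the areas to attend to next."""
    return [(x + step * vx, y + step * vy) for (x, y, vx, vy) in tracks]

def attention_enhance(frame, predicted_positions, radius=5, gain=2.0):
    """Attention module: locally amplify contrast around each predicted position
    so that a low-contrast small target survives downstream STMD thresholding."""
    enhanced = frame.astype(float).copy()
    for x, y in predicted_positions:
        r0, r1 = max(int(y) - radius, 0), int(y) + radius + 1
        c0, c1 = max(int(x) - radius, 0), int(x) + radius + 1
        patch = enhanced[r0:r1, c0:c1]
        enhanced[r0:r1, c0:c1] = patch.mean() + gain * (patch - patch.mean())
    return enhanced

# Toy usage: a faint target whose track predicts it will appear at (20, 20).
frame = np.full((64, 64), 0.5)
frame[20, 20] = 0.52                                # a low-contrast small target
tracks = [(19.0, 19.0, 1.0, 1.0)]                   # track state from the previous frame
enhanced = attention_enhance(frame, predict_next_positions(tracks))
print(frame[20, 20], enhanced[20, 20])              # the target's local contrast is boosted
```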

    Prognostic analysis of cT1-3N1M0 breast cancer patients who have responded to neoadjuvant therapy undergoing various axillary surgery and breast surgery based on propensity score matching and competitive risk model

    Background: Sentinel lymph node biopsy (SLNB) in breast cancer patients with clinically positive axillary lymph nodes (cN1+) remains a topic of controversy. The aim of this study is to assess the influence of various axillary and breast surgery approaches on the survival of cN1+ breast cancer patients who have responded positively to neoadjuvant therapy (NAT). Methods: Patients diagnosed with pathologically confirmed invasive ductal carcinoma of the breast between 2010 and 2020 were identified from the Surveillance, Epidemiology, and End Results (SEER) database. To mitigate confounding bias, propensity score matching (PSM) analysis was employed. Prognostic factors for both overall survival (OS) and breast cancer-specific survival (BCSS) were evaluated through Cox regression analysis. Survival curves were generated using the Kaplan-Meier method. Furthermore, cumulative incidence and independent prognostic factors were assessed using a competing risk model. Results: The PSM analysis matched 4,890 patients. OS and BCSS were slightly worse in the axillary lymph node dissection (ALND) group (HR = 1.10, 95% CI 0.91-1.31, p = 0.322 and HR = 1.06, 95% CI 0.87-1.29, p = 0.545, respectively). The mastectomy (MAST) group exhibited significantly worse OS and BCSS outcomes (HR = 1.25, 95% CI 1.04-1.50, p = 0.018 and HR = 1.37, 95% CI 1.12-1.68, p = 0.002, respectively). The combination of different axillary and breast surgeries did not significantly affect OS (p = 0.083) but did have a significant impact on BCSS (p = 0.019). Competing risk model analysis revealed no significant difference in the cumulative incidence of breast cancer-specific death (BCSD) between the axillary surgery groups (Gray's test, p = 0.232), but showed a higher cumulative incidence of BCSD in the MAST group (Gray's test, p = 0.001). Multivariate analysis demonstrated that age ≥ 70 years, black race, T3 stage, ER-negative expression, HER2-negative expression, and MAST were independent prognostic risk factors for both OS and BCSS (all p < 0.05). Conclusion: For cN1+ breast cancer patients who respond positively to NAT, the optimal surgical approach combines breast-conserving surgery (BCS) with SLNB. This procedure improves quality of life and long-term survival outcomes.
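    The statistical pipeline described here (propensity score matching followed by Cox proportional-hazards modelling on the matched cohort) can be sketched in Python with scikit-learn and lifelines. The file name, column names and 1:1 nearest-neighbour matching with replacement below are illustrative assumptions, not the authors' actual SEER workflow; categorical covariates are assumed to be numerically encoded already.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from lifelines import CoxPHFitter

# Hypothetical, already numerically encoded columns standing in for SEER variables.
df = pd.read_csv("seer_cohort.csv")
covariates = ["age", "t_stage", "er_status", "her2_status"]

# 1) Propensity score for receiving mastectomy rather than breast-conserving surgery.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["mastectomy"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2) 1:1 nearest-neighbour matching on the propensity score (with replacement).
treated = df[df["mastectomy"] == 1]
control = df[df["mastectomy"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]]).reset_index(drop=True)

# 3) Cox proportional-hazards model for overall survival on the matched cohort.
cph = CoxPHFitter()
cph.fit(matched[["months", "death", "mastectomy"] + covariates],
        duration_col="months", event_col="death")
cph.print_summary()
```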